Translation and Dictionary
Words near each other
・ Neyla Moronta
・ Neylak
・ Neylak, Kermanshah
・ Neylan McBaine
・ Neyland
・ Neyland railway station
・ Neyland RFC
・ Neyland Stadium
・ Neylandville Marl
・ Neylandville, Texas
・ Neylon
・ Neylon v Dickens
・ Neylor Lopes Gonçalves
・ Neyma
・ Neyman construction
Neyman–Pearson lemma
・ Neymar
・ Neymer Miranda
・ NeyNava
・ Neynava truck
・ Neyneh
・ Neynizak
・ Neyo
・ Neyo language
・ Neypahan
・ Neyrang
・ Neyraudia
・ Neyraudia reynaudiana
・ Neyrazh
・ Neyrazh-e Olya



Neyman–Pearson lemma : Wikipedia (English edition)
Neyman–Pearson lemma
In statistics, the Neyman–Pearson lemma, named after Jerzy Neyman and Egon Pearson, states that when performing a hypothesis test between two simple hypotheses H_0 : \theta=\theta_0 and H_1 : \theta=\theta_1, the likelihood-ratio test which rejects H_0 in favour of H_1 when
:\Lambda(x)=\frac{L(\theta_0|x)}{L(\theta_1|x)} \leq \eta
where
:P(\Lambda(X)\leq \eta\mid H_0)=\alpha
is the most powerful test at significance level ''α'' for a threshold η. If the test is most powerful for all \theta_1 \in \Theta_1, it is said to be uniformly most powerful (UMP) for alternatives in the set \Theta_1 \, .
In practice, the likelihood ratio is often used directly to construct tests (see Likelihood-ratio test). However, it can also be used to suggest particular test statistics of interest, or to suggest simplified tests. For this, one manipulates the ratio algebraically to see whether it contains key statistics related to its size (i.e. whether a large statistic corresponds to a small ratio or to a large one).
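As a concrete illustration of such algebraic manipulation (a minimal Python sketch, not from the article; the hypotheses H_0 : N(0,1) vs H_1 : N(1,1) and all function names are assumptions chosen for the example), the log of the likelihood ratio for i.i.d. Gaussian data collapses to a linear function of the sample sum, so rejecting for small \Lambda(x) is the same as rejecting for a large sample mean:

```python
# Illustrative sketch: simple-vs-simple Gaussian likelihood-ratio test,
# H0: N(0, 1) versus H1: N(1, 1), with n i.i.d. observations.
import math
import random

def log_likelihood(xs, mu):
    """Log-likelihood of i.i.d. N(mu, 1) data (additive constants dropped)."""
    return -0.5 * sum((x - mu) ** 2 for x in xs)

def log_lambda(xs, mu0=0.0, mu1=1.0):
    """log Lambda(x) = log L(theta_0 | x) - log L(theta_1 | x)."""
    return log_likelihood(xs, mu0) - log_likelihood(xs, mu1)

# Algebra: log Lambda = (mu0 - mu1) * sum(x) + n * (mu1**2 - mu0**2) / 2,
# so {Lambda <= eta} is exactly {mean(x) >= c} for some constant c:
# the sample mean is the "key statistic" hidden in the ratio.
random.seed(0)
xs = [random.gauss(0.0, 1.0) for _ in range(20)]
n = len(xs)
direct = log_lambda(xs)
via_sum = -sum(xs) + n * 0.5  # mu0 = 0, mu1 = 1 special case
assert abs(direct - via_sum) < 1e-9
```

The same reduction explains why one-sided z-tests and t-tests arise from the likelihood ratio in Gaussian models.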
== Proof ==
Define the rejection region of the null hypothesis for the NP test as
:R_{NP}=\left\{ x : \Lambda(x) \leq \eta \right\}
where \eta is chosen so that P(R_{NP},\theta_0)=\alpha\,.
Any other test will have a different rejection region that we define as R_A. Furthermore, define the probability of the data falling in region R, given parameter \theta as
:P(R,\theta)=\int_R L(\theta|x)\, dx,
For the test with critical region R_A to have level \alpha, it must be true that \alpha \ge P(R_A, \theta_0) , hence
:\alpha = P(R_{NP}, \theta_0) \ge P(R_A, \theta_0) \,.
It will be useful to break these down into integrals over distinct regions:
:P(R_{NP},\theta) = P(R_{NP} \cap R_A, \theta) + P(R_{NP} \cap R_A^c, \theta),
and
:P(R_A,\theta) = P(R_{NP} \cap R_A, \theta) + P(R_{NP}^c \cap R_A, \theta).
Setting \theta=\theta_0, these two expressions and the above inequality yield that
:P(R_{NP} \cap R_A^c, \theta_0) \ge P(R_{NP}^c \cap R_A, \theta_0).
Comparing the powers of the two tests, P(R_{NP},\theta_1) and P(R_A,\theta_1), one can see that
:P(R_{NP},\theta_1) \geq P(R_A,\theta_1) \iff
P(R_{NP} \cap R_A^c, \theta_1) \geq P(R_{NP}^c \cap R_A, \theta_1).
Now by the definition of R_{NP}, we have L(\theta_1|x) \geq \frac{1}{\eta}L(\theta_0|x) on R_{NP} and L(\theta_1|x) \leq \frac{1}{\eta}L(\theta_0|x) on R_{NP}^c, so
: P(R_{NP} \cap R_A^c, \theta_1) = \int_{R_{NP} \cap R_A^c} L(\theta_1|x)\,dx \geq \frac{1}{\eta} \int_{R_{NP} \cap R_A^c} L(\theta_0|x)\,dx = \frac{1}{\eta}P(R_{NP} \cap R_A^c, \theta_0)
: \geq \frac{1}{\eta}P(R_{NP}^c \cap R_A, \theta_0) = \frac{1}{\eta}\int_{R_{NP}^c \cap R_A} L(\theta_0|x)\,dx \geq \int_{R_{NP}^c \cap R_A} L(\theta_1|x)\,dx = P(R_{NP}^c \cap R_A, \theta_1).
Hence the inequality holds: the NP test is at least as powerful as any other test of the same level.
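The conclusion of the proof can be checked numerically. The following Monte Carlo sketch (an illustration, not from the article; the hypotheses H_0 : N(0,1) vs H_1 : N(1,1), the single-observation setting, and all names and constants are assumptions made for the example) compares the NP rejection region with a different region of the same size under H_0 and finds the NP test far more powerful:

```python
# Illustrative Monte Carlo check of the lemma for a single observation X,
# H0: X ~ N(0, 1) versus H1: X ~ N(1, 1), at significance level ~0.05.
import random

random.seed(42)
TRIALS = 100_000
C = 1.645  # approximate upper 5% point of N(0, 1)

def power(region, mu, trials=TRIALS):
    """Monte Carlo estimate of P(X in region) when X ~ N(mu, 1)."""
    hits = sum(region(random.gauss(mu, 1.0)) for _ in range(trials))
    return hits / trials

def np_region(x):
    # Here Lambda(x) = exp(-x + 1/2), so Lambda(x) <= eta  <=>  x >= C.
    return x >= C

def alt_region(x):
    # A different rejection region with the same size under H0.
    return x <= -C

# Both tests have size ~0.05 under H0 ...
assert abs(power(np_region, 0.0) - 0.05) < 0.01
assert abs(power(alt_region, 0.0) - 0.05) < 0.01
# ... but the NP test has much higher power under H1.
assert power(np_region, 1.0) > power(alt_region, 1.0)
```

The alternative region rejects in the "wrong" tail, so it almost never detects the shift to mean 1, while the NP region concentrates its probability of rejection exactly where the likelihood ratio is small.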

Excerpt source: the free encyclopedia Wikipedia
Read the full text of "Neyman–Pearson lemma" at Wikipedia



Translation and Dictionary : Internet resources for translation

Copyright(C) kotoba.ne.jp 1997-2016. All Rights Reserved.